
    Contribution of cats and dogs to SARS-CoV-2 transmission in households

    INTRODUCTION: SARS-CoV-2 is known to jump across species. Transmission between humans and companion animals within households has been demonstrated, but the contribution of companion animals to overall within-household transmission is unknown. The basic reproduction number (R0) is an important indicator for quantifying transmission. For a pathogen with multiple host species, such as SARS-CoV-2, the basic reproduction number must be calculated from the partial reproduction numbers for each combination of host species. METHOD: In this study, the basic and partial reproduction numbers for SARS-CoV-2 were estimated by reanalyzing a survey of Dutch households with dogs and cats and at least one SARS-CoV-2-infected human. RESULTS: For households with cats, a clear correlation between the number of cats and the basic reproduction number (Spearman's ρ = 0.40, p-value: 1.4 × 10⁻⁵) was identified, while for dogs the correlation was smaller and not significant (Spearman's ρ = 0.12, p-value: 0.21). Partial reproduction numbers from cats or dogs to humans were 0.3 (0.0–2.0) and 0.3 (0.0–3.5), and from humans to cats or dogs were 0.6 (0.4–0.8) and 0.6 (0.4–0.9). DISCUSSION: The estimates of within-household transmission thus indicate that transmission from these companion animals to humans, and vice versa, is likely, although the observational nature of the study precludes conclusive evidence. Given the possibility of transmission, these findings support the advice given to COVID-19 patients during the pandemic to keep their distance from companion animals as a precautionary measure, although the overall impact on the pandemic is relatively limited compared with human-to-human transmission.
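For a two-host system, the multi-host basic reproduction number can be obtained as the dominant eigenvalue of the next-generation matrix assembled from the partial reproduction numbers. A minimal sketch using the human–cat point estimates reported above; the human-to-human and cat-to-cat entries are hypothetical placeholders, not values from the study:

```python
import numpy as np

# Next-generation matrix K for a two-host (human, cat) system.
# K[i][j] = expected new cases in host i caused by one infected
# individual of host j (the partial reproduction numbers).
R_human_from_cat = 0.3    # reported point estimate (cat -> human)
R_cat_from_human = 0.6    # reported point estimate (human -> cat)
R_human_from_human = 1.2  # hypothetical placeholder
R_cat_from_cat = 0.0      # hypothetical placeholder

K = np.array([
    [R_human_from_human, R_human_from_cat],
    [R_cat_from_human,   R_cat_from_cat],
])

# The multi-host basic reproduction number is the dominant eigenvalue of K.
r0 = max(abs(np.linalg.eigvals(K)))
print(r0)
```

Even with no cat-to-cat transmission, the cat–human loop pushes the dominant eigenvalue above the human-only value, which is why the partial reproduction numbers matter for the household R0.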

    CeRebrUm and CardIac Protection with ALlopurinol in Neonates with Critical Congenital Heart Disease Requiring Cardiac Surgery with Cardiopulmonary Bypass (CRUCIAL): study protocol of a phase III, randomized, quadruple-blinded, placebo-controlled, Dutch multicenter trial

    BACKGROUND: Neonates with critical congenital heart disease (CCHD) undergoing cardiac surgery with cardiopulmonary bypass (CPB) are at risk of brain injury that may result in adverse neurodevelopment. To date, no therapy is available to improve the long-term neurodevelopmental outcomes of CCHD neonates. Allopurinol, a xanthine oxidase inhibitor, prevents the formation of reactive oxygen and nitrogen species, thereby limiting cell damage during reperfusion and reoxygenation of the brain and heart. Animal and neonatal studies suggest that allopurinol reduces hypoxic-ischemic brain injury, is cardioprotective, and is safe. This trial aims to test the hypothesis that allopurinol administration in CCHD neonates will result in a 20% reduction in moderate to severe ischemic and hemorrhagic brain injury. METHODS: This is a phase III, randomized, quadruple-blinded, placebo-controlled, multicenter trial. Neonates with a prenatal or postnatal CCHD diagnosis requiring cardiac surgery with CPB in the first 4 weeks after birth are eligible to participate. Allopurinol or mannitol placebo will be administered intravenously at 20 mg/kg per dose: 2 doses early postnatally in antenatally diagnosed neonates, plus 3 perioperative doses in all neonates. The primary outcome is a composite endpoint of moderate/severe ischemic or hemorrhagic brain injury on early postoperative MRI, being too unstable for postoperative MRI, or mortality within 1 month following CPB. A total of 236 patients (n = 188 with a prenatal diagnosis) is required to demonstrate a reduction in the incidence of the primary outcome by 20% in the prenatal group and by 9% in the postnatal group (power 80%; overall type 1 error controlled at 5%, two-sided), including 1 interim analysis at n = 118 (n = 94 with prenatal diagnosis) with the option to stop early for efficacy.
Secondary outcomes include preoperative and postoperative brain injury severity, white matter injury volume (MRI), and cardiac function (echocardiography); postnatal and postoperative seizure activity (aEEG) and regional cerebral oxygen saturation (NIRS); neurodevelopment at 3 months (general movements); motor, cognitive, and language development and quality of life at 24 months; and the safety and cost-effectiveness of allopurinol. DISCUSSION: This trial will investigate whether allopurinol administered directly after birth and around cardiac surgery reduces moderate/severe ischemic and hemorrhagic brain injury and improves cardiac function and neurodevelopmental outcome in CCHD neonates. TRIAL REGISTRATION: EudraCT 2017-004596-31, registered on November 14, 2017. ClinicalTrials.gov NCT04217421, registered on January 3, 2020. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13063-022-06098-y.
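Sample-size statements like the one above typically follow the normal-approximation formula for comparing two proportions. A minimal sketch; the event rates in the example are hypothetical, since the abstract does not report the baseline incidence, so it does not reproduce the trial's n = 236:

```python
import math
from statistics import NormalDist

def n_per_arm(p_control, p_treat, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-proportion comparison."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided type 1 error
    z_b = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return math.ceil((z_a + z_b) ** 2 * variance / (p_control - p_treat) ** 2)

# Hypothetical example: a 20-percentage-point absolute reduction (50% -> 30%)
print(n_per_arm(0.50, 0.30))
```

Raising the target power or shrinking the expected reduction both inflate the required sample, which is why the trial pools prenatal and postnatal strata with different assumed effect sizes.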

    Time to focus on outcome assessment tools for childhood vasculitis

    Childhood systemic vasculitides are a group of rare diseases with multi-organ involvement and potentially devastating consequences. After the establishment of new classification criteria (Ankara consensus conference, 2008), it is now time to define measures of activity and damage in childhood primary vasculitis. In contrast to adult vasculitis, there is no consensus on indices for activity and damage assessment in childhood vasculitis. Assessment of disease activity is likely to become a major area of interest in pediatric rheumatology in the near future. After defining the classification criteria for primary systemic childhood vasculitis, the next step was to perform a validation study using the original Birmingham vasculitis activity score as well as the disease extent index to measure disease activity in childhood vasculitis. Efforts are currently under way to develop a pediatric vasculitis activity score. This paper reviews the current understanding of the assessment tools (i.e., clinical features, laboratory tests, radiologic assessments, etc.) widely used for evaluating disease activity and damage status in children with vasculitis.

    Development and Validation of a Risk Score for Chronic Kidney Disease in HIV Infection Using Prospective Cohort Data from the D:A:D Study

    Background: Chronic kidney disease (CKD) is a major health issue for HIV-positive individuals and is associated with increased morbidity and mortality. Development and implementation of a risk score model for CKD would allow comparison of the risks and benefits of adding potentially nephrotoxic antiretrovirals to a treatment regimen and would identify those at greatest risk of CKD. The aim of this study was to develop a simple, externally validated, and widely applicable long-term risk score model for CKD in HIV-positive individuals that can guide decision making in clinical practice. Methods and Findings: A total of 17,954 HIV-positive individuals from the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study with >= 3 estimated glomerular filtration rate (eGFR) values after 1 January 2004 were included. Baseline was defined as the first eGFR > 60 ml/min/1.73 m2 after 1 January 2004; individuals with exposure to tenofovir, atazanavir, atazanavir/ritonavir, lopinavir/ritonavir, or other boosted protease inhibitors before baseline were excluded. CKD was defined as confirmed (>3 mo apart) eGFR at or below this baseline threshold. In the D:A:D study, 641 individuals developed CKD during 103,185 person-years of follow-up (PYFU; incidence 6.2/1,000 PYFU, 95% CI 5.7-6.7; median follow-up 6.1 y, range 0.3-9.1 y). Older age, intravenous drug use, hepatitis C coinfection, lower baseline eGFR, female gender, lower CD4 count nadir, hypertension, diabetes, and cardiovascular disease (CVD) predicted CKD. The adjusted incidence rate ratios of these nine categorical variables were scaled and summed to create the risk score. The median risk score at baseline was -2 (interquartile range -4 to 2). There was a 1:393 chance of developing CKD in the next 5 y in the low risk group, compared with a substantially higher chance in the high risk group (risk score >= 5, 505 events).
Number needed to harm (NNTH) at 5 y when starting unboosted atazanavir or lopinavir/ritonavir among those with a low risk score was 1,702 (95% CI 1,166-3,367); NNTH was 202 (95% CI 159-278) and 21 (95% CI 19-23), respectively, for those with a medium and high risk score. NNTH was 739 (95% CI 506-1,462), 88 (95% CI 69-121), and 9 (95% CI 8-10) for those with a low, medium, and high risk score, respectively, starting tenofovir, atazanavir/ritonavir, or another boosted protease inhibitor. The Royal Free Hospital Clinic Cohort included 2,548 individuals, of whom 94 (3.7%) developed CKD during 18,376 PYFU (median follow-up 7.4 y, range 0.3-12.7 y). Of 2,013 individuals included from the SMART/ESPRIT control arms, 32 (1.6%) developed CKD during 8,452 PYFU (median follow-up 4.1 y, range 0.6-8.1 y). External validation showed that the risk score predicted well in these cohorts. Limitations of this study included limited data on race and no information on proteinuria. Conclusions: Both traditional and HIV-related risk factors were predictive of CKD. These factors were used to develop an externally validated risk score for CKD in HIV infection that has direct clinical relevance for patients and clinicians, allowing them to weigh the benefits of certain antiretrovirals against the risk of CKD and to identify those at greatest risk.
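A point-based risk score of this kind is typically built by scaling each variable's adjusted log-incidence-rate-ratio to an integer point value and summing the points present in a patient's profile. A minimal sketch; the factor names, IRR values, and scaling constant are hypothetical illustrations, not the published D:A:D weights:

```python
import math

# Hypothetical adjusted incidence rate ratios (IRRs) per risk factor.
# The published D:A:D score uses its own fitted values; these are placeholders.
irrs = {
    "age_over_50": 2.0,
    "hepatitis_c": 1.5,
    "hypertension": 1.8,
    "diabetes": 1.6,
}

def points(irr, scale=2.0):
    """Scale a log-IRR to an integer point value."""
    return round(scale * math.log(irr))

def risk_score(profile):
    """Sum the points for every risk factor present in a patient profile."""
    return sum(points(irrs[f]) for f in profile if profile[f])

patient = {"age_over_50": True, "hepatitis_c": False,
           "hypertension": True, "diabetes": True}
print(risk_score(patient))
```

Working on the log scale keeps the score additive (multiplying IRRs corresponds to adding points), which is what makes a simple integer total usable at the bedside.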

    Non-AIDS defining cancers in the D:A:D Study: time trends and predictors of survival: a cohort study

    BACKGROUND: Non-AIDS defining cancers (NADC) are an important cause of morbidity and mortality in HIV-positive individuals. Using data from a large international cohort of HIV-positive individuals, we described the incidence of NADC from 2004 to 2010 and examined subsequent mortality and its predictors. METHODS: Individuals were followed from 1st January 2004 or enrolment in the study until the earliest of a new NADC, 1st February 2010, death, or six months after the patient's last visit. Incidence rates were estimated for each year of follow-up, overall and stratified by gender, age, and mode of HIV acquisition. Cumulative risk of mortality following NADC diagnosis was summarised using Kaplan-Meier methods, with follow-up for these analyses from the date of NADC diagnosis until the patient's death, 1st February 2010, or 6 months after the patient's last visit. Factors associated with mortality following NADC diagnosis were identified using multivariable Cox proportional hazards regression. RESULTS: Over 176,775 person-years (PY), 880 (2.1%) patients developed a new NADC (incidence: 4.98/1,000 PY [95% confidence interval 4.65, 5.31]). Over a third of these patients (327, 37.2%) had died by 1st February 2010. Time trends for lung cancer, anal cancer, and Hodgkin's lymphoma were broadly consistent. Kaplan-Meier cumulative mortality estimates at 1, 3, and 5 years after NADC diagnosis were 28.2% [95% CI 25.1-31.2], 42.0% [38.2-45.8], and 47.3% [42.4-52.2], respectively. Significant predictors of poorer survival after NADC diagnosis were lung cancer (compared with other cancer types), male gender, non-white ethnicity, and smoking status. Later year of diagnosis and higher CD4 count at NADC diagnosis were associated with improved survival. The incidence of NADC remained stable over the period 2004-2010 in this large observational cohort. CONCLUSIONS: The prognosis after diagnosis of NADC, in particular lung cancer and disseminated cancer, is poor but has improved somewhat over time.
Modifiable risk factors, such as smoking and low CD4 counts, were associated with mortality following a diagnosis of NADC.
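The cumulative mortality figures above come from the Kaplan-Meier product-limit estimator, which multiplies the conditional survival probabilities at each event time while censored subjects only shrink the risk set. A minimal self-contained sketch on toy data, not study data:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times:  follow-up time for each subject
    events: 1 if the subject died at that time, 0 if censored
    Returns a list of (time, survival probability) at each death time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for (tt, e) in data if tt == t)
        if deaths:
            survival *= 1 - deaths / at_risk
            curve.append((t, survival))
        # remove every subject (death or censored) with this time from the risk set
        same = sum(1 for (tt, _) in data if tt == t)
        at_risk -= same
        i += same
    return curve

# Toy example: deaths at t=1 and t=3, censoring at t=2 and t=4
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0])
print(curve)
```

Cumulative mortality at time t, as reported in the abstract, is simply 1 minus the survival estimate at t.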

    Scientific Opinion on welfare aspects of the use of perches for laying hens

    This opinion investigated the use of perches for laying hens in cage and non-cage systems. It is based on various activities reviewing the effects of perch height and design on hen health and welfare. Systematic and extensive literature reviews were conducted to assess the scientific evidence on hen motivation to grasp and seek elevation, and on the appropriate height of perches as well as other features (position, material, colour, temperature, shape, width and length). In addition, an expert knowledge elicitation (EKE) exercise was run with experts at a technical hearing to discuss and prioritise the various design aspects of perches. Overall, the body of literature on perches is limited, and relevant features of perches are often confounded with others. In the literature, the most commonly used animal-based measures to assess perch adequacy are keel bone damage, foot pad lesions and perch use by hens. Overall, hens seek elevation during the day as well as during the night, when they select a site for roosting. Elevated perches allow hens to monitor the environment, escape from other hens, avoid disturbances and improve thermoregulation. For night-time roosting, hens show a preference for perches higher than 60 cm over lower perches. However, elevated perches can have negative consequences, such as an increased prevalence of keel bone deformities and fractures. The risk of injury increases when hens have to jump a distance of more than 80 cm vertically, horizontally or diagonally to reach or leave a perch, or jump at an angle between 45° and 90° (measured from the horizontal plane). Material, shape, length and width of the perch also influence perch preference by hens. The EKE exercise suggests that an adequate perch is elevated, accessible and functional (providing sufficient overview). The opinion concludes that for the design of an adequate perch, different features of perches need to be further investigated and integrated.
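The geometric risk rule above (jumps longer than 80 cm, or at angles between 45° and 90° from the horizontal) can be expressed directly. A sketch under the assumption that both thresholds are inclusive on the angle side; the function name and parameters are illustrative, not from the opinion:

```python
import math

def risky_jump(dx_cm, dy_cm, max_dist_cm=80.0, min_angle_deg=45.0):
    """Check a jump between perches against the distance and angle rules.

    dx_cm: horizontal separation, dy_cm: vertical separation.
    Risky if the straight-line distance exceeds 80 cm, or the jump angle
    measured from the horizontal falls between 45 and 90 degrees.
    """
    distance = math.hypot(dx_cm, dy_cm)
    angle = math.degrees(math.atan2(abs(dy_cm), abs(dx_cm)))
    return distance > max_dist_cm or min_angle_deg <= angle <= 90.0

print(risky_jump(60, 20))  # short, shallow jump
print(risky_jump(30, 90))  # long, steep jump
```

Note that the angle rule can flag a jump even when the distance rule does not: a 10 cm horizontal, 50 cm vertical hop is well under 80 cm but close to vertical.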

    Virological responses to lamivudine or emtricitabine when combined with tenofovir and a protease inhibitor in treatment-naïve HIV-1-infected patients in the Dutch AIDS Therapy Evaluation in the Netherlands (ATHENA) cohort

    Objectives: Lamivudine (3TC) and emtricitabine (FTC) are considered interchangeable in recommended tenofovir disoproxil fumarate (TDF)-containing combination antiretroviral therapies (cARTs). This assumption of equivalence has not been systematically studied. We compared the treatment responses to 3TC and FTC combined with TDF in boosted protease inhibitor (PI)-based cART for HIV-1-infected patients. Methods: An observational study in the AIDS Therapy Evaluation in the Netherlands (ATHENA) cohort was carried out between 2002 and 2013. Virological failure rates, time to HIV RNA suppression <400 copies/mL, and time to treatment failure were analysed using multivariable logistic regression and Cox proportional hazards models. Sensitivity analyses included propensity score-adjusted models. Results: A total of 1582 ART-naïve HIV-1-infected patients initiated 3TC or FTC with TDF and ritonavir-boosted darunavir (29.6%), atazanavir (41.5%), lopinavir (27.1%), or another PI (1.8%). Week 48 virological failure rates on 3TC and FTC were comparable (8.9% and 5.6%, respectively; P = 0.208). The multivariable adjusted odds ratio of virological failure when using 3TC instead of FTC with TDF in PI-based cART was 0.75 [95% confidence interval (CI) 0.32–1.79; P = 0.51]. Propensity score-adjusted models showed comparable results. The adjusted hazard ratio (HR) for treatment failure of 3TC compared with FTC was 1.15 (95% CI 0.58–2.27) within 240 weeks after cART initiation. The time to two consecutive HIV RNA measurements <400 copies/mL within 48 weeks (HR 0.94; 95% CI 0.78–1.16) and the time to treatment failure after suppression <400 copies/mL (HR 0.94; 95% CI 0.36–2.50) were not significantly influenced by the use of 3TC in TDF/PI-containing cART. Conclusions: Virological responses did not differ significantly between treatment-naïve HIV-1-infected patients starting 3TC/TDF and those starting FTC/TDF with a ritonavir-boosted PI.
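An unadjusted comparison of two failure rates like the week-48 one above is a standard two-proportion z-test. A minimal sketch; the per-arm denominators below are hypothetical, since the abstract reports only the percentages and the pooled total, so the resulting p-value is not the study's P = 0.208:

```python
import math
from statistics import NormalDist

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided z-test p-value for the difference between two proportions,
    using the pooled estimate under the null hypothesis of equal rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: 8.9% failures of 200 on 3TC vs 5.6% of 500 on FTC
p = two_proportion_p(18, 200, 28, 500)
print(p)
```

The study's reported estimates were adjusted (logistic regression, propensity scores), which is why a crude test like this is only a first-pass check, not a substitute for the modelling described above.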
